Many recent advances in speech separation primarily target synthetic mixtures of short audio utterances with a high degree of overlap. These datasets differ significantly from real conversational data, and therefore models trained and evaluated on them do not generalize to real conversational scenarios. Another problem in using most of these models for long-form speech is the nondeterministic ordering of the separated speech segments, which arises from unsupervised clustering of time-frequency masks or from the permutation-invariant training (PIT) loss. This makes it difficult to accurately stitch homogeneous speaker segments for downstream tasks such as automatic speech recognition (ASR). In this paper, we propose a speaker-conditioned separator trained on speaker embeddings extracted directly from the mixed signal. We train this model with a directed loss that regulates the order of the separated segments. With this model, we achieve significant improvements in word error rate (WER) on real conversational data, without the need for an additional re-stitching step.
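To make the ordering issue concrete, the following is a minimal sketch (not the authors' implementation) contrasting a permutation-invariant training loss, which searches over output-to-speaker assignments, with a fixed-order "directed" loss in which each output channel is tied to a specific speaker; it assumes PyTorch and a simple L1 reconstruction objective on two sources.

import itertools
import torch

def pit_loss(est, ref):
    # est, ref: (batch, num_spk, time); take the best permutation per example.
    losses = []
    for perm in itertools.permutations(range(ref.shape[1])):
        losses.append(torch.mean(torch.abs(est[:, list(perm)] - ref), dim=(1, 2)))
    return torch.min(torch.stack(losses, dim=0), dim=0).values.mean()

def directed_loss(est, ref):
    # Output channels are tied to speaker identity, so no permutation search.
    return torch.mean(torch.abs(est - ref))

est = torch.randn(4, 2, 16000)   # hypothetical separator outputs
ref = torch.randn(4, 2, 16000)   # reference sources in speaker order
print(pit_loss(est, ref).item(), directed_loss(est, ref).item())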
The previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes for different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in types, scale, pose, occlusion, and lighting conditions. Current object detectors such as YOLOv5 and Faster RCNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
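As a rough illustration of the hierarchical modeling that flat detectors lack, here is a minimal sketch (with assumed feature dimension and class counts, not the FGVD baseline code) of a classification head that predicts a label at each of three hierarchy levels for detected vehicle crops.

import torch
import torch.nn as nn

class ThreeLevelHead(nn.Module):
    def __init__(self, feat_dim=512, n_level1=6, n_level2=50, n_level3=210):
        super().__init__()
        # One linear classifier per hierarchy level (coarse -> fine).
        self.level1 = nn.Linear(feat_dim, n_level1)
        self.level2 = nn.Linear(feat_dim, n_level2)
        self.level3 = nn.Linear(feat_dim, n_level3)

    def forward(self, feats):
        # Returns one logit vector per level for every detected crop.
        return self.level1(feats), self.level2(feats), self.level3(feats)

head = ThreeLevelHead()
crop_features = torch.randn(8, 512)   # hypothetical backbone features of 8 detected crops
l1, l2, l3 = head(crop_features)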
Long-term OCR services aim to provide high-quality output to their users at competitive costs. Upgrading the models is essential because of the complex data uploaded by the users. The service providers encourage users who provide data on which the OCR model fails by rewarding them based on data complexity, readability, and the available budget. Hitherto, OCR work has focused on preparing models on standard datasets without considering the end users. We propose a strategy of consistently upgrading an existing Handwritten Hindi OCR model three times on the dataset of 15 users. We fix a budget of 4 users for each iteration. For the first iteration, the model trains directly on the dataset from the first four users. For the subsequent iterations, all remaining users write one page each, which the service providers then analyze to select the 4 (new) best users based on the quality of predictions on the human-readable words. The selected users write 23 more pages for upgrading the model. We upgrade the model with Curriculum Learning (CL) on the data available in the current iteration and compare it with a subset from previous iterations. The upgraded model is tested on a held-out set of one page each from all 23 users. We provide insights from our investigations into the effect of CL, user selection, and especially data from unseen writing styles. Our work can be used for long-term OCR services in crowd-sourcing scenarios for both service providers and end users.
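A minimal sketch of the iterative upgrade loop described above; the scoring, page-collection, and fine-tuning steps are left as assumed callables, since the paper's actual selection criteria and curriculum schedule are not reproduced here.

def upgrade_rounds(model, users, n_rounds=3, per_round=4,
                   score_user=None, collect_pages=None, finetune=None):
    # score_user, collect_pages, finetune are placeholder callables (assumptions).
    remaining = list(users)
    for _ in range(n_rounds):
        # Rank candidate users by prediction quality on their sample page.
        scored = sorted(remaining, key=lambda u: score_user(model, u), reverse=True)
        selected, remaining = scored[:per_round], scored[per_round:]
        data = collect_pages(selected)                       # newly written pages
        data = sorted(data, key=lambda s: s["difficulty"])   # curriculum: easy -> hard
        model = finetune(model, data)
    return model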
We introduce LaViLa, a new approach to learning video-language representations by leveraging Large Language Models (LLMs). We repurpose pre-trained LLMs to be conditioned on visual input, and finetune them to create automatic video narrators. Our auto-generated narrations offer a number of advantages, including dense coverage of long videos, better temporal synchronization of the visual information and text, and much higher diversity of text. The video-text embedding learned contrastively with these additional auto-generated narrations outperforms the previous state-of-the-art on multiple first-person and third-person video tasks, in both zero-shot and finetuned setups. Most notably, LaViLa obtains absolute gains of 10.1% on the EGTEA classification and 5.9% on the Epic-Kitchens-100 multi-instance retrieval benchmarks. Furthermore, LaViLa trained with only half the narrations from the Ego4D dataset outperforms baseline models trained on the full set, and shows positive scaling behavior with increasing pre-training data and model size.
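As an illustration of how the auto-generated narrations feed into representation learning, the following is a minimal sketch (not the LaViLa implementation) of a symmetric InfoNCE-style contrastive objective between video-clip and narration embeddings, assuming the embeddings are already computed and L2-normalized.

import torch
import torch.nn.functional as F

def contrastive_loss(video_emb, text_emb, temperature=0.07):
    # (batch, batch) similarity matrix; matched clip/narration pairs sit on the diagonal.
    logits = video_emb @ text_emb.t() / temperature
    targets = torch.arange(video_emb.shape[0])
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

v = F.normalize(torch.randn(16, 256), dim=-1)   # hypothetical clip embeddings
t = F.normalize(torch.randn(16, 256), dim=-1)   # hypothetical narration embeddings
print(contrastive_loss(v, t).item())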
We explore unifying a neural segmenter with two-pass cascaded encoder ASR into a single model. A key challenge is allowing the segmenter (which runs in real time, synchronously with the decoder) to finalize the 2nd pass (which runs 900 ms behind real time) without introducing user-perceived latency or deletion errors during inference. We propose a design where the neural segmenter is integrated with the causal 1st pass decoder to emit an end-of-segment (EOS) signal in real time. The EOS signal is then used to finalize the non-causal 2nd pass. We experiment with different ways to finalize the 2nd pass, and find that a novel dummy-frame injection strategy allows for simultaneously high-quality 2nd pass results and low finalization latency. On a real-world long-form captioning task (YouTube), we achieve 2.4% relative WER and 140 ms EOS latency gains over a baseline VAD-based segmenter with the same cascaded encoder.
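As a rough, assumption-level illustration (not the production system), dummy-frame injection can be pictured as appending zero frames so that the non-causal 2nd-pass encoder has the right context it needs to finalize a segment immediately after EOS, instead of waiting for real future audio.

import numpy as np

def finalize_segment(segment_frames, right_context=30, feat_dim=80):
    # Append zero "dummy" frames to stand in for the future right context.
    dummy = np.zeros((right_context, feat_dim), dtype=np.float32)
    padded = np.concatenate([segment_frames, dummy], axis=0)
    return padded  # would be fed to the non-causal 2nd-pass encoder for finalization

frames = np.random.randn(200, 80).astype(np.float32)  # hypothetical segment features
print(finalize_segment(frames).shape)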
Damage to the inferior frontal gyrus (Broca's area) can cause agrammatic aphasia, wherein patients, although able to comprehend, lack the ability to form complete sentences. This inability leads to communication gaps that cause difficulties in their daily lives. The usage of assistive devices can help mitigate these issues and enable the patients to communicate effectively. However, due to the lack of large-scale studies of linguistic deficits in aphasia, research on such assistive technology is relatively limited. In this work, we present two contributions that aim to re-initiate research and development in this field. Firstly, we propose a model that uses linguistic features from small-scale studies on aphasia patients and generates large-scale datasets of synthetic aphasic utterances from grammatically correct datasets. We show that the mean length of utterance, the noun/verb ratio, and the simple/complex sentence ratio of our synthetic datasets correspond to the reported features of aphasic speech. Further, we demonstrate how the synthetic datasets may be utilized to develop assistive devices for aphasia patients. A pre-trained T5 transformer is fine-tuned on the generated dataset to suggest 5 corrected sentences given an aphasic utterance as input. We evaluate the efficacy of the T5 model using BLEU and cosine semantic similarity scores, obtaining affirming results with a BLEU score of 0.827/1.00 and a semantic similarity of 0.904/1.00. These results provide a strong foundation for the concept that a synthetic dataset based on small-scale studies of aphasia can be used to develop effective assistive technology.
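A minimal sketch (using the generic "t5-small" checkpoint as a stand-in for the fine-tuned model) of generating 5 candidate corrections for an aphasic utterance with beam search in Hugging Face transformers.

from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")           # stand-in for the fine-tuned checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-small")

utterance = "want water glass"                                   # illustrative aphasic-style input
inputs = tokenizer(utterance, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=5, max_length=32)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))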
This work introduces the novel task of Source-free Multi-target Domain Adaptation and proposes an adaptation framework comprising \textbf{Co}nsistency with \textbf{N}uclear-Norm Maximization and \textbf{Mix}Up knowledge distillation (\textit{CoNMix}) as a solution to this problem. The main motive of this work is to solve Single- and Multi-target Domain Adaptation (SMTDA) in the source-free paradigm, which enforces the constraint that labeled source data are not available during target adaptation due to various privacy-related restrictions on data sharing. The source-free approach leverages target pseudo-labels, which can be noisy, to improve target adaptation. We introduce consistency between label-preserving augmentations and utilize pseudo-label refinement methods to reduce noisy pseudo-labels. Further, we propose a novel MixUp Knowledge Distillation (MKD) for better generalization on multiple target domains using various source-free STDA models. We also show that a Vision Transformer (VT) backbone gives better feature representation with improved domain transferability and class discriminability. Our proposed framework achieves state-of-the-art (SOTA) results in various paradigms of source-free STDA and MTDA settings on popular domain adaptation datasets like Office-Home, Office-Caltech, and DomainNet. Project page: https://sites.google.com/view/conmix-vcl
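A minimal sketch (not the CoNMix code) of one ingredient named in the title, nuclear-norm maximization: maximizing the nuclear norm of the batch softmax-prediction matrix encourages predictions that are both confident and diverse across classes.

import torch
import torch.nn.functional as F

def nuclear_norm_loss(logits):
    probs = F.softmax(logits, dim=1)                   # (batch, num_classes) prediction matrix
    nuc = torch.linalg.matrix_norm(probs, ord="nuc")   # sum of singular values
    return -nuc / probs.shape[0]                       # negate so minimization maximizes the norm

logits = torch.randn(32, 65, requires_grad=True)        # e.g. Office-Home has 65 classes
nuclear_norm_loss(logits).backward()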
Digital art is an artistic method that uses digital technology as part of the generative or creative process. With the advent of digital currencies and NFTs (non-fungible tokens), the demand for digital art is growing rapidly. In this manuscript, we advocate the concept of using deep generative networks with adversarial training for stable and variant art generation. The work mainly focuses on using Deep Convolutional Generative Adversarial Networks (DC-GAN) and explores techniques for addressing common pitfalls in GAN training. We compare various architectures and designs of DC-GANs to arrive at recommended design choices for stable and realistic generation. The main focus of this work is to generate photorealistic images that do not exist in reality but are synthesized from random noise by the proposed model. We present visual results of generated animal face images (some showing evidence of species mixtures) along with recommendations on training, architecture, and design choices. We also show how preprocessing of the training images plays an important role in GAN training.
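A minimal sketch (heavily simplified relative to the manuscript's architectures, with tiny 8x8 images) of a single DC-GAN training step, including one-sided label smoothing as one example of the common stabilization tricks discussed for GAN training.

import torch
import torch.nn as nn

G = nn.Sequential(nn.ConvTranspose2d(100, 64, 4, 1, 0), nn.ReLU(),
                  nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
                  nn.Conv2d(64, 1, 4, 1, 0), nn.Flatten())
opt_g = torch.optim.Adam(G.parameters(), 2e-4)
opt_d = torch.optim.Adam(D.parameters(), 2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(8, 3, 8, 8)                  # stand-in for a batch of training images
z = torch.randn(8, 100, 1, 1)                   # random noise input to the generator
fake = G(z)

# Discriminator step: real targets smoothed to 0.9, generated images labeled 0.
d_loss = bce(D(real), torch.full((8, 1), 0.9)) + bce(D(fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator label generated images as real.
g_loss = bce(D(fake), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()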
This manuscript addresses the simultaneous problems of predicting all-cause inpatient readmission or death after discharge and quantifying the impact of discharge placement in preventing these adverse events. To this end, we developed an inherently interpretable multilevel Bayesian modeling framework inspired by the piecewise linearity of ReLU-activated deep neural networks. Within a survival model, we explicitly adjust for confounding to quantify local average treatment effects of discharge-placement interventions. The model was trained on a 5% sample of Medicare beneficiaries from 2008 and 2011, and then tested on claims from 2012. Evaluated on classification accuracy for 30-day all-cause unplanned readmission (defined using the official CMS methodology), the model performed similarly to XGBoost, logistic regression (after feature engineering), and a Bayesian deep neural network trained on the same data. Tested on held-out future data for the 30-day classification task, the model achieved an AUROC of approximately 0.76 and an AUPRC of approximately 0.50 (relative to the overall positive rate in the test data), demonstrating that one need not sacrifice interpretability for accuracy. In addition, the model achieved a test AUROC of 0.78 when classifying 90-day all-cause unplanned readmission or death. We readily inspect our inherently interpretable model and summarize its main findings. Furthermore, we demonstrate how the black-box post-hoc explainer tool SHAP can generate explanations that are not supported by the fitted model and, if taken at face value, do not provide enough context to make the model actionable.
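For context on the reported numbers, here is a minimal sketch (with random stand-in data, not the Medicare claims) of computing AUROC and AUPRC with scikit-learn; the AUPRC is read against the overall positive rate of the test set.

import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.15, size=5000)                          # hypothetical readmission labels
y_score = np.clip(y_true * 0.3 + rng.normal(0.3, 0.2, size=5000), 0, 1)  # hypothetical risk scores

print("AUROC:", roc_auc_score(y_true, y_score))
print("AUPRC:", average_precision_score(y_true, y_score))
print("base positive rate:", y_true.mean())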
This report covers our reproduction effort for "Differentiable Spatial Planning using Transformers" by Chaplot et al. The paper considers the problem of spatial path planning in a differentiable way. The authors show that their proposed method using Spatial Planning Transformers outperforms prior data-driven models and leverages the differentiable structure to simultaneously learn the mapping without ground-truth maps. We verify these claims by reproducing their experiments and testing their method on new data. We also investigate the stability of planning accuracy as obstacle complexity in the maps increases. Efforts to investigate and verify the learning of the mapping module failed due to a lack of computational resources and the authors being unreachable.